Adversarial Objectives for Text Generation

Author

  • Ankit Vani
Abstract

Language models can be used to generate text by iteratively sampling words conditioned on previously sampled words. In this work, we explore adversarial objectives to obtain good text generations by training a recurrent language model to keep its hidden state statistics during sampling similar to what it sees during maximum likelihood estimation (MLE) training. We analyze the convergence of these models and discuss the effect of the adversarial objective on word-level and character-level language models. We find that using an adversarial objective assists the MLE objective and results in faster convergence and lower validation perplexities for character-level language models. We also collect sentence quality ratings from human participants and show that character-level language models with the adversarial objective generate qualitatively better sentences than the standard character-level recurrent language model.
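The idea in the abstract, matching hidden-state statistics between free-running sampling and teacher-forced MLE training via a discriminator, can be illustrated with a minimal numpy sketch. This is a hypothetical toy setup (a vanilla RNN and a logistic-regression discriminator on hidden states), not the paper's actual architecture; all names and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 20, 16  # toy vocabulary size and hidden size (assumptions)

# vanilla RNN language-model parameters
Wxh = rng.normal(scale=0.1, size=(V, H))
Whh = rng.normal(scale=0.1, size=(H, H))
Who = rng.normal(scale=0.1, size=(H, V))

# discriminator: logistic regression on a single hidden state
w_d = rng.normal(scale=0.1, size=H)

def step(h, x):
    """One RNN step: x is a token id, returns the new hidden state."""
    return np.tanh(Wxh[x] + h @ Whh)

def teacher_forced_states(tokens):
    """Hidden states seen during MLE training (inputs are ground-truth tokens)."""
    h, states = np.zeros(H), []
    for t in tokens:
        h = step(h, t)
        states.append(h)
    return np.array(states)

def free_running_states(n, first_token=0):
    """Hidden states during sampling (inputs are the model's own samples)."""
    h, x, states = np.zeros(H), first_token, []
    for _ in range(n):
        h = step(h, x)
        states.append(h)
        logits = h @ Who
        p = np.exp(logits - logits.max())
        p /= p.sum()
        x = rng.choice(V, p=p)
    return np.array(states)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

tokens = rng.integers(0, V, size=12)
h_mle = teacher_forced_states(tokens)
h_smp = free_running_states(12)

# discriminator scores: probability a hidden state came from teacher forcing
d_mle, d_smp = sigmoid(h_mle @ w_d), sigmoid(h_smp @ w_d)

# discriminator loss: learn to tell the two hidden-state distributions apart
loss_d = -np.mean(np.log(d_mle + 1e-9)) - np.mean(np.log(1 - d_smp + 1e-9))

# generator's adversarial loss: make sampled-state statistics look like MLE's
loss_adv = -np.mean(np.log(d_smp + 1e-9))

print(loss_d, loss_adv)
```

In a real training loop, `loss_adv` would be added to the usual MLE cross-entropy for the generator while `loss_d` trains the discriminator, alternating between the two; the sketch only shows how the two hidden-state populations enter the losses.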


Related articles

Improvement of generative adversarial networks for automatic text-to-image generation

This research concerns the use of deep learning tools and image processing technology for the automatic generation of images from text. Previous research has used a single sentence to produce images. In this research, a memory-based hierarchical model is presented that uses three different descriptions, given in the form of sentences, to produce and refine the image. The proposed ...

Full text

DANCin SEQ2SEQ: Fooling Text Classifiers with Adversarial Text Example Generation

Machine learning models are powerful but fallible. Generating adversarial examples (inputs deliberately crafted to cause model misclassification or other errors) can yield important insight into model assumptions and vulnerabilities. Despite significant recent work on adversarial example generation targeting image classifiers, relatively little work exists exploring adversarial example generation...

Full text

Language Generation with Recurrent Generative Adversarial Networks without Pre-training

Generative Adversarial Networks (GANs) have shown great promise recently in image generation. Training GANs for text generation has proven to be more difficult, because of the non-differentiable nature of generating text with recurrent neural networks. Consequently, past work has either resorted to pre-training with maximum likelihood or used convolutional networks for generation. In this work, ...

Full text

DP-GAN: Diversity-Promoting Generative Adversarial Network for Generating Informative and Diversified Text

Existing text generation methods tend to produce repeated and “boring” expressions. To tackle this problem, we propose a new text generation model, called Diversity-Promoting Generative Adversarial Network (DP-GAN). The proposed model assigns low reward for repeated text and high reward for “novel” text, encouraging the generator to produce diverse and informative text. Moreover, we propose a n...
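The repetition-penalizing idea described above can be sketched with a toy reward function. This is a hypothetical illustration of the low-reward-for-repeated, high-reward-for-novel principle, not DP-GAN's actual reward formulation; `novelty_reward` and `alpha` are invented names.

```python
from collections import Counter

def novelty_reward(tokens, corpus_counts, alpha=1.0):
    """Hypothetical reward: average inverse frequency of each generated token,
    so often-repeated tokens earn low reward and rare ('novel') tokens earn
    high reward."""
    return sum(1.0 / (1.0 + alpha * corpus_counts[t]) for t in tokens) / len(tokens)

counts = Counter("the the the quick brown fox".split())
print(novelty_reward(["the", "the"], counts))    # repeated token -> lower reward
print(novelty_reward(["fox", "quick"], counts))  # rarer tokens -> higher reward
```

In a GAN setting such a reward would be produced by the discriminator rather than a fixed frequency count; the sketch only shows the direction of the incentive the abstract describes.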

Full text

Text Generation using Generative Adversarial Training

Generative models reduce the need for laborious dataset labeling. Text generation techniques can be applied to improving language models, machine translation, summarization, and captioning. This project experiments with different recurrent neural network models to build generative adversarial networks for generating text from noise. The trained generator is capable of producing...

Full text



Publication date: 2016